INPUT DEVICE, IN PARTICULAR FOR A MOTOR VEHICLE, FOR THE CONTACTLESS CAPTURE OF A POSITION AND/OR A CHANGE OF POSITION OF AT LEAST ONE FINGER OF A USER'S HAND
Patent abstract:
Input device, in particular for a motor vehicle, having a video sensor (3) for the contactless capture of a position and/or a change of position of at least one finger (8) of the hand (6) of the user. The input device recognizes an input as a function of the position and/or the change of position of at least one finger (8). The input device determines a neutral zone (11) in the image captured by the camera, this zone being adjacent to at least one gesture zone (12-15). The input device (2) takes a change of position in a gesture zone (12-15) into account as an input only if the gesture originates from the neutral zone (11).

Publication number: FR3029655A1
Application number: FR1561811
Filing date: 2015-12-03
Publication date: 2016-06-10
Inventors: Markus Langenberg; Gerrit de Boer; Philippe Dreuw; Esther-Sabrina Wacker
Applicant: Robert Bosch GmbH
Patent description:
[0001] FIELD OF THE INVENTION The present invention relates to an input device, in particular for a motor vehicle, having a video sensor for the contactless capture of a position and/or a change of position of at least one finger of the user's hand, the input device recognizing an input as a function of the position and/or the change of position of at least one finger and executing that input.

State of the art. Current vehicles use control concepts in which the input device and the display device are arranged close to each other. Typically, touch screens and touch pads are used, so that control and display take place at the same location. Frequently, the display devices are located in the upper zone of the console or dashboard of the motor vehicle, so that the driver does not have to look too far away from the traffic while reading them. In other vehicles, the touch pad, i.e. the touch-sensitive sensor, is located in the region of the armrest, while the display device is at the usual location in the dashboard. Visual feedback to the driver operating the sensor can then take the form of a transparent hand shown by the display device. The driver can thus operate the input device comfortably while the display remains at a favourable viewing angle. In this case, the display device may also be realized not as a screen but as a head-up display. While conventional touch sensors or touch pads require the user to touch them in order to operate them, input devices are also known that capture, i.e. record, inputs without contact. In such devices, the position of the user's hand, fingers and/or arm in space is detected, for example using depth sensors, and exploited for gesture control. For finger gestures, a high resolution is required, obtained with sensors such as time-of-flight sensors, stereoscopic cameras, structured light or similar optical means. For gestures of the hand or body, sensors with a lower resolution, such as radar sensors, can be used. With the aid of one or more sensors, the position or the change of position of the user's hand is captured and, depending on the captured position and/or change of position, an input is recognized and executed. By a movement of at least one finger of his hand, the user thus signals to the input device the input he wishes to make. The input device recognizes the desired input from the hand or finger movement and executes it by performing the command predefined by the movement and, for example, changing an operating parameter of the motor vehicle. Thus, for example, as a function of the change of position of a user's finger, the input "increase the volume" can be detected and executed, the input device increasing the volume of the sound emitted, for example, by the sound system of the vehicle.

Document US 2012/0105613 A1 describes the capture of hand or finger gestures by means of a video camera in order to control vehicle functions according to the recognized gesture. EP 2 441 635 A1 discloses a similar system; this document also describes the tracking over time of the changes of position of a fingertip in space.
[0002] DISCLOSURE AND ADVANTAGES OF THE INVENTION The present invention relates to an input device of the type defined above, characterized in that it determines a neutral zone in the image captured by the camera, this zone being adjacent to at least one gesture zone, the input device taking a change of position in the gesture zone into account as an input only if the gesture originates from the neutral zone.

The input device according to the invention has the advantage of distinguishing, in a simple and resource-saving manner, between intentional sweeping gestures and unintentional return or repositioning movements of the user. The image captured by the camera is subdivided into several zones, and a movement of the fingers and/or the hand of the user is recognized as a gesture, or taken into account for determining an input, only if the change of position executed in the respective zone fulfils the conditions defined for that zone. Thus, according to the invention, as indicated above, a neutral zone is defined in the image captured by the video camera, with an adjacent gesture zone; a change of position in the gesture zone is taken into account only if the gesture comes from the neutral zone. The user must therefore first place his hand in the neutral zone and, to perform a sweeping gesture, move his fingers out of the neutral zone into the gesture zone, so that the sweeping gesture is recognized as intentional and the associated action is triggered by the input device. In particular, the position of the user's hand within the capture area of the camera is indicated on a display installation, i.e. the image captured by the camera is presented at least schematically; more advantageously, the neutral zone and at least one gesture zone are also displayed, so that the user can easily orient himself and perform the sweeping or wiping gesture. The size of the neutral zone is preferably chosen so that, compared with an intentional wiping or sweeping gesture, the repositioning movements, which are normally smaller, remain within the neutral zone. This avoids that a return movement of the fingers which unintentionally reaches a neighbouring gesture zone is interpreted as an input. The user can thus perform unambiguous wiping gestures to operate the input device or to operate, by means of the input device, other installations of the vehicle.

According to a preferred development of the invention, the input device takes changes of position in the gesture zone into account as input only if these changes of position take place in a direction coming from the neutral zone, which further guarantees that return movements of the hand or of the fingers are not considered as input gestures. According to another characteristic, the neutral zone is defined according to the position of the hand in the captured image so that the hand, when at least substantially at rest, lies at least substantially within the neutral zone. Movements, especially those of the fingertips, thus always come from the neutral zone or start in the neutral zone. In particular, the neutral zone is of rectangular shape so as to receive the hand of the user completely. Preferably, the dimension of the neutral zone is larger than the hand captured in the image.
This makes it possible, for example, to take into account the proximity of the user's hand to the camera and increases the variability and robustness of the system. Since the neutral zone is displayed to the user, the neutral zone can also be determined not as a variable zone but as a fixed zone in the image of the video camera. According to a preferred development, the neutral zone is larger than the hand, so that the gesture zone is at a distance from the hand resting in the neutral zone. As already indicated, this guarantees that, in order to begin a sweeping gesture, a part of the neutral zone must first be crossed before the fingertips reach the gesture zone, so that unintentional or repositioning movements are not recognized as inputs.

[0003] In a particularly preferred manner, the neutral zone moves with the hand; this is made possible in particular by the capture of the hand in the image provided by the sensor. As the neutral zone moves as well, it is always guaranteed that the user starts his wiping gesture from within the neutral zone. Preferably, the neutral zone moves at a speed lower than that of a typical wiping gesture, which prevents the neutral zone from being dragged along with the wiping movement, which would prevent the gesture from being recognized as an input. Alternatively or additionally, the displaced zone may be anchored to a point of the hand which usually moves more slowly than the fingertips when a wiping gesture is performed. In particular, it is provided to determine the center of gravity of the hand and to move the neutral zone according to the position or change of position of this center of gravity. As the center of gravity of the hand, the wrist joint may in particular be chosen; it can be detected by analyzing the image data of the camera in a known manner. Compared with the wrist joint, the fingers perform a sweeping gesture much faster, so that the difference in speed between the fingers on the one hand and the wrist joint or the center of gravity of the hand on the other hand makes it possible to apply the method described above. While the neutral zone is shifted at most at the speed of the wrist or of the center of gravity of the hand, the fingers leave the neutral zone at a greater speed and enter the gesture zone, so that the changes of position occurring there are taken into account for determining the input.

According to another preferred characteristic, at least two gesture zones are provided opposite one another, each adjacent to the neutral zone. The two gesture zones work as indicated above, i.e. by means of a sweeping gesture. Thus, starting from the neutral zone, the user can perform a sweeping gesture in two directions, for example to move up in the context menu by performing the sweeping gesture in one gesture zone, or to move down by performing the wiping gesture in the opposite gesture zone. Advantageously, on the side opposite the wrist joint, there is also a gesture zone adjacent to the neutral zone. According to another advantageous characteristic, if a change of position is captured in several gesture zones, only the change of position in the gesture zone in which the greatest number of changes of position has been detected is taken into account for the input. This further ensures that, even if the user performs a large number of extensive movements in the course of which his hand reaches a gesture zone he did not intend to reach, he is nevertheless able to make a correct input.
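The displacement behaviour of paragraph [0003] can be illustrated by a minimal sketch in Python, assuming pixel coordinates and one update per camera frame; the function name and the max_speed cap are illustrative assumptions, not values from the patent. Because the cap is chosen below the typical fingertip speed of a sweep, a fast sweep outruns the neutral zone and the fingertips cross into a gesture zone:

```python
import numpy as np

def update_neutral_zone(zone_center, hand_centroid, max_speed=4.0):
    """Move the neutral zone toward the hand centroid at a capped speed.

    max_speed is in pixels per frame and deliberately smaller than the
    typical fingertip speed of a sweeping gesture, so that a fast sweep
    leaves the neutral zone behind and enters a gesture zone.
    """
    center = np.asarray(zone_center, dtype=float)
    step = np.asarray(hand_centroid, dtype=float) - center
    dist = np.linalg.norm(step)
    if dist > max_speed:          # clamp the per-frame displacement
        step *= max_speed / dist
    return center + step
```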
[0004] In a particularly preferred manner, the changes of position are calculated or determined on the basis of vector data derived from the image captured by the video sensor. This makes it possible to define unambiguous directions of movement and to compare them, for example, with the direction of movement authorized for a zone, and allows fast processing of the changes of position, so that the desired inputs are detected quickly.

Drawings. The present invention will be described in more detail below with the aid of an input device shown in the accompanying drawings, in which: FIG. 1 shows the passenger compartment of a motor vehicle equipped with the input device according to the invention, and FIG. 2 shows the image provided by the camera of the input device.

DESCRIPTION OF AN EMBODIMENT FIG. 1 is a schematic representation of the passenger compartment of a motor vehicle 1, not shown in detail, equipped with an input device 2 for entering commands without contact. For this purpose, the input device comprises a contactlessly operating sensor camera 3 and a display unit 4. The display unit 4 is integrated in the dashboard or in the operating console of the vehicle 1. The display unit 4 is an image screen, in particular a display, and may, for example, be part of a navigation system or of a sound system of the vehicle 1. Alternatively or additionally, the display unit 4 may be realized as a head-up display. The video sensor or camera 3 is preferably a two-dimensional video camera installation with a capture area represented by dashed lines. In this example, the video camera is preferably oriented towards the front side of the center armrest 5 of the motor vehicle 1. The armrest 5 itself has no input surface of its own on which the driver could enter a command by touching it with his hand 6, which is shown only schematically. Instead, the input device 2 captures a position and/or change of position of at least one finger of the hand 6 in space, deduces an input from it and executes that input. The image data captured by the camera are used, as described hereinafter, for controlling the input device 2.

FIG. 2 shows an example of an image captured by the sensor 3. The hand 6 of the user is in the captured image. The input device 2 allows the user to perform sweeping gestures in the capture area of the sensor 3 to operate the input device 2. Return movements and mere repositioning movements are not taken into account. The method described below takes into account sweeping gestures performed ergonomically from the wrist joint. In order to distinguish between intentional sweeping gestures and unintentional return movements, the method uses the fact that, in a wiping movement, the joint 7 of the hand 6 of the user remains more or less still, while the surface of the hand, and in particular the fingers 8 of the hand 6, move relative to the joint. If, in the embodiment shown, the sweeping gesture is directed to the left as indicated by the arrow 9, the sweeping movement takes place on the left side of the joint 7, from right to left. The return movement therefore runs from left to right and thus cannot, at first, be distinguished from a sweep to the right.
However, since this return movement only runs until the hand reaches the neutral or rest position on the left side of the joint 7, instead of continuing to the right side, it can be distinguished from an intentional sweep to the right. The condition is that the sweeping gestures are made with respect to a reference point that is as fixed as possible, in the present case the joint 7, and that the return of the hand does not exceed, or does not significantly exceed, the neutral position.

[0005] The cross represented on the hand in FIG. 2 indicates the relatively rigid center of the hand, i.e. the center of gravity 10 of the hand 6. A neutral zone 11 is associated with the center of gravity 10 in the captured image. In this embodiment, the neutral zone 11 has a rectangular shape. This zone is oriented and arranged so that the hand 6 lies practically completely within the neutral zone 11. Changes of position or movements made within the neutral zone 11 are ignored by the input device 2. When a sweeping gesture is performed, the center of gravity 10 also moves, but far less in extent and amplitude than the surface of the hand with the fingers 8 of the hand 6. A sufficiently large neutral zone 11 around the center of gravity 10 gives the input device 2 additional robustness, because the neutral position is represented by an extended region in space, so that unintended sweeping gestures are not recognized or not used as an input.

Four gesture zones 12, 13, 14, 15 are adjacent to the neutral zone 11, each adjoining one side of the neutral zone 11. In the present example, the gesture zones 12-15 are also of rectangular shape. To perform a sweeping gesture to the left, the surface of the hand with the fingers 8 must leave the central region, i.e. the neutral zone 11, and pass into the left image region, that is to say into the gesture zone 12. If a change of position is detected in this image region in a direction coming from the neutral zone 11 along the arrow 9, it is recognized as a sweeping gesture to the left. For the return movement in the same gesture zone 12, a movement to the right is detected. The input device 2 recognizes as an input in the gesture zone 12 only those changes of position which come from the neutral zone 11. A movement to the right in the gesture zone 12 is thus an unauthorized movement, which is not taken into account by the input device 2 when detecting an input. Similarly, for a sweeping movement to the right, the return movement is captured in the gesture zone 14; a movement to the left in the gesture zone 14 corresponds to a return movement, is not authorized and is therefore not taken into account. The gesture zones 12-15 can thus be regarded as one-way paths for the authorized sweeping direction.
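The zone layout of FIG. 2 and the "one-way path" rule can be sketched as follows; this is an illustrative reading, not the patent's implementation. It assumes image coordinates with the y axis pointing downwards, the pairing of zones 13 and 15 with upward and downward sweeps follows paragraph [0008] below, and the helper names are hypothetical:

```python
def build_zones(centroid, half_w, half_h, img_w, img_h):
    """Rectangles (x0, y0, x1, y1) in image coordinates: the neutral zone 11
    around the hand centroid and the four adjacent gesture zones."""
    cx, cy = centroid
    x0, x1 = cx - half_w, cx + half_w
    y0, y1 = cy - half_h, cy + half_h
    return {
        11: (x0, y0, x1, y1),      # neutral zone around the centroid
        12: (0, y0, x0, y1),       # left of the neutral zone
        14: (x1, y0, img_w, y1),   # right of the neutral zone
        13: (x0, 0, x1, y0),       # above the neutral zone (upward sweeps)
        15: (x0, y1, x1, img_h),   # below the neutral zone (downward sweeps)
    }

def allowed(zone_id, dx, dy):
    """One-way rule: only motion directed away from the neutral zone counts.
    The image y axis points downwards, so 'up' means dy < 0."""
    return {12: dx < 0, 14: dx > 0, 13: dy < 0, 15: dy > 0}[zone_id]
```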
[0006] The gesture zones 12 and 14 are situated on either side of the neutral zone 11. If several gesture zones, in particular the gesture zones 12 and 14 facing each other, each give rise to the detection of a change of position in their authorized direction, advantageously only the changes of position in the gesture zone 12 or 14 in which the majority of the movements or changes of position is detected are taken into account for the detection of the input.

To carry out the method, the input device 2 has a calculation unit, not shown in detail, which relies on the video image of the sensor 3 to perform motion detection. For this, the optical flow in the image is determined. The result is a set of vectors which, for certain image points in the image as indicated in FIG. 2, give, at an instant t, the offset in the image plane at time t + 1. This set of vectors is hereinafter referred to as the 2D flow field and represents the movement of the hand 6 in the capture area of the sensor 3. The calculation unit applies an algorithm, stored in a fixed memory, which evaluates the movement pattern in each time step with respect to a reference point of the hand that is as rigid as possible, namely the center of gravity 10 defined above. This is preferably done in that the resulting flow field is first filtered with a band-pass filter according to the length of the flow vectors (intensity of movement). The further calculations only consider flow vectors whose length exceeds a first threshold S1 while remaining below a second threshold S2. The thresholds are obtained automatically from the statistics of the flow field. They can, however, also be chosen differently, for example so that they correspond to the 5% and 95% percentiles. This means that 5% of the flow vectors in the flow field have a length smaller than the first threshold S1 and 5% of the flow vectors have a length greater than the second threshold S2. The first threshold S1 eliminates movement noise caused, for example, by pixel noise. The filtering with the second threshold S2 eliminates the relatively large movements of the fingertips which occur during wiping relative to the less mobile center of the hand (center of gravity 10). The center of gravity of the flow field is then calculated as the arithmetic mean of the end points of the flow vectors:

$$s = \frac{1}{N} \sum_{j=1}^{N} p(x_j, y_j)$$

In this formula, N denotes the number of flow vectors in the filtered flow field and p(x_j, y_j) represents the end point of the respective flow vector j in image coordinates. To improve the robustness or inertia of the calculated center of gravity 10, a filtering over time is also performed. This means that, in a sliding window of fixed size corresponding, for example, to ten time steps, the arithmetic mean of the centers of gravity within this time window is formed. Alternatively, more complex filtering methods can be provided, such as the application of a Gaussian filter, to further increase the robustness.

[0007] With respect to the filtered center of gravity 10, the image coordinates of the different gesture zones and of the neutral zone are then determined. The optimal extension of the neutral zone can also be determined dynamically from the filtered flow field. For this, the extension of the filtered flow field as a bounding box is first calculated. The neutral zone 11 around the center of gravity 10 is then determined in relation to the obtained extension of the filtered flow field, for example as 90% of the extension in each direction. In the present exemplary embodiment in FIG. 2, for the region to the right of the center of gravity 10, i.e. the right edge of the neutral zone 11, a lower percentage of the extension is selected because, according to the physiology of the hand, an upward sweeping motion has a smaller amplitude than a sweeping motion in the other directions. The gesture zones 12-15 are defined in that they adjoin the neutral zone as described above.
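A minimal sketch of this processing chain, assuming a dense optical-flow field such as the one produced by OpenCV's Farnebäck estimator; the 5%/95% percentile thresholds and the ten-step sliding window follow the text, while the function name and the flow parameters are illustrative:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the optical-flow step

def flow_centroid(prev_gray, gray, history, window=10):
    """Filter a dense optical-flow field and return a smoothed hand centroid.

    history is a caller-owned list of past centroids used for the
    sliding-window temporal smoothing described in the text.
    """
    # Dense optical flow: one (dx, dy) vector per pixel (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    lengths = np.linalg.norm(flow, axis=2)

    # Band-pass on vector length: thresholds S1 and S2 taken as the 5% and
    # 95% percentiles of the flow-field statistics, as in the text.
    s1, s2 = np.percentile(lengths, [5, 95])
    ys, xs = np.nonzero((lengths > s1) & (lengths < s2))
    if xs.size == 0:
        return None

    # End points p(x_j, y_j) of the remaining flow vectors; their arithmetic
    # mean is the centroid s of the filtered flow field.
    end_x = xs + flow[ys, xs, 0]
    end_y = ys + flow[ys, xs, 1]
    centroid = np.array([end_x.mean(), end_y.mean()])

    # Temporal smoothing over a fixed sliding window (here: 10 time steps).
    history.append(centroid)
    if len(history) > window:
        history.pop(0)
    return np.mean(history, axis=0)
```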
[0008] The calculation unit also executes an algorithm that determines the gesture zone with the instantaneous maximum of movement, i.e. with the instantaneous maximum number of changes of position. This is advantageously done in that, in each time step, the average length of the flow vectors is calculated for each of the gesture zones 12-15. In addition, the number of flow vectors whose length exceeds a third threshold S3 is determined, as a measure of the current share of fast movements in that gesture zone. The third threshold S3 is preferably chosen as a constant, as a function of the image resolution or capture resolution of the sensor 3 and of the distance between the sensor 3 and the chosen interaction plane. For the gesture zone in which both the average length of the flow vectors and the number of "fast" flow vectors are at a maximum, the motion information is then processed further. For this, from all the flow vectors in this gesture zone, the mean direction of the flow vectors (preferably in degrees) is calculated and mapped onto a quadrant of the unit circle. Thus, for example, a movement to the left in the gesture zone 12 corresponds to an angular range of 45°-135°, a downward sweep in the corresponding gesture zone corresponds to an angular range of 135°-225°, a sweep to the right in the gesture zone 14 corresponds to an angular range of 225°-315°, and an upward sweep in the gesture zone 13 corresponds to an angular range between 315° and 45°. A predefined gesture alphabet, that is to say a set of actions associated with the different gestures, assigns each of the four gesture zones 12-15 to one of the four quadrants. To detect a sweeping gesture to the left, it is necessary, for example, to detect a movement in the angular range of 45°-135° in the gesture zone 12. Any other movement in this zone is ignored. This implements the rule described above, in that only movements in a direction coming from the neutral zone 11 are evaluated as an input. This applies analogously to the sweeping movements downwards, to the right and upwards in the gesture zones 15, 14 and 13. In general, the association between the sweeping directions and the quadrants can be freely parameterized on the unit circle and depends in particular on the practical implementation of the algorithm. In addition, the angular resolutions for the different directions can be chosen arbitrarily and need not be equidistant, so that the detection of sweeping gestures in certain directions can be made more or less sensitive. According to the gesture alphabet, the authorized direction of movement is determined for the zone with the maximum movement, and the input device 2 generates the event that corresponds to the gesture, i.e. calls or initiates the action associated with the gesture.
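The quadrant logic of paragraph [0008], including the optional length weighting of paragraph [0009] below, might be sketched as follows; the zone-to-quadrant table copies the angle ranges named in the text, the pairing of zone 15 with downward sweeps is inferred, and all names are hypothetical:

```python
import numpy as np

# Quadrants of the unit circle as named in the description; the association
# between sweep directions and quadrants is freely parameterizable.
ZONE_QUADRANTS = {
    12: ("sweep left", 45.0, 135.0),
    15: ("sweep down", 135.0, 225.0),
    14: ("sweep right", 225.0, 315.0),
    13: ("sweep up", 315.0, 45.0),   # wraps around 0 degrees
}

def mean_flow_direction(vectors, weight_by_length=False):
    """Mean direction in degrees [0, 360) of a zone's flow vectors,
    averaged on the unit circle, optionally weighted by vector length."""
    v = np.asarray(vectors, dtype=float)              # shape (N, 2): (dx, dy)
    w = np.linalg.norm(v, axis=1) if weight_by_length else None
    ang = np.arctan2(v[:, 1], v[:, 0])
    mean = np.arctan2(np.average(np.sin(ang), weights=w),
                      np.average(np.cos(ang), weights=w))
    return np.degrees(mean) % 360.0

def detect_gesture(zone_id, vectors, weight_by_length=False):
    """Return the zone's gesture if the mean direction falls in its
    quadrant; any other movement in the zone is ignored."""
    gesture, lo, hi = ZONE_QUADRANTS[zone_id]
    a = mean_flow_direction(vectors, weight_by_length)
    inside = (lo <= a < hi) if lo < hi else (a >= lo or a < hi)
    return gesture if inside else None
```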
[0009] Instead of calculating the mean flow direction in a gesture zone as a simple average, the flow vectors can also be weighted with their length (intensity of movement). The method presented always assumes that a gesture begins in the neutral zone 11. Once the sweeping movement has been carried out, a return movement into the neutral zone therefore takes place, which is in any case performed intuitively by the majority of users. If the amplitude or extent of the return movement is such that the movement completely crosses the neutral zone 11, this results in the unwanted detection of the return movement as a gesture in the opposite gesture zone. The robustness of the gesture detection can be further increased by modelling this return movement explicitly. In this case, the corresponding event is only triggered, i.e. the corresponding input is only recognized, if, after a correctly recognized sweeping movement, an opposite movement (return movement) is detected within a defined time window. This detection is independent of the different zones 11-15 and makes it possible to robustly ignore return movements whose execution extends beyond the neutral zone 11 into the neighbouring gesture zone. The size of the time window is preferably chosen according to the capture frequency of the sensor 3 and to the desired proximity in time of the return movement to the actual gesture; it may also be adapted to individual conditions of use.

[0010] By capturing the center of gravity 10 of the hand 6, not only a robust neutralization of the return movements during input detection is achieved, but also the detection of sweeping gestures at any point in the capture area of the sensor 3, because the neutral zone 11 moves with the center of gravity 10 of the hand 6. As an extension, it is possible to recognize not only simple unidirectional sweeping gestures but also to extend the gesture alphabet, i.e. the inputs to be detected, to compound sweeping gestures. By dividing the capture area into the zones 11-15 coupled to the center of gravity 10, it is possible to distinguish, for example, between a left-right sweeping gesture and a left sweep followed by a return movement: in addition to the pure direction information, the coarsely detected location information can be used. Thus, an intentional compound sweep consisting of successive, very fast movements to the left in the gesture zone 12 followed by movements to the right in the gesture zone 14 can be detected with a lower latency in time. Similarly, more complex sweeping gestures may be defined as the sum of the captured movement directions and the corresponding gesture zones. The gestures to be used for the different inputs, that is to say the gestures of the gesture alphabet, are advantageously recorded before commissioning in a read-only memory as models, which can be compared with the currently captured gestures in order to detect the respective input.

[0011] NOMENCLATURE OF MAIN ELEMENTS
1 Motor vehicle
2 Input device
3 Video sensor
4 Display unit
5 Armrest
6 Hand
7 Hand joint (wrist)
8 Finger
9 Arrow
10 Center of gravity of the hand
11 Neutral zone
12-15 Gesture zones
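As an illustration of the explicit return-movement model of paragraph [0009] above, a minimal sketch; the class name and the 0.5 s default window are assumptions, since the text only states that the window is chosen according to the capture frequency of the sensor 3:

```python
import time

OPPOSITE = {"sweep left": "sweep right", "sweep right": "sweep left",
            "sweep up": "sweep down", "sweep down": "sweep up"}

class ReturnWindowFilter:
    """Commit a sweep only if its return movement follows in time."""

    def __init__(self, window_s=0.5):
        self.window_s = window_s
        self.pending = None            # (gesture, timestamp) awaiting return

    def observe(self, gesture, now=None):
        """Feed each detected movement; returns a confirmed gesture or None."""
        now = time.monotonic() if now is None else now
        if self.pending is not None:
            prev, t0 = self.pending
            if now - t0 <= self.window_s and gesture == OPPOSITE[prev]:
                self.pending = None
                return prev            # sweep confirmed by its return movement
            self.pending = None        # window expired or unrelated movement
        self.pending = (gesture, now)
        return None
```

Fed with "sweep left" followed within the window by "sweep right", the filter commits a single left sweep; a return movement that overshoots the neutral zone 11 is thus absorbed instead of being reported as a second gesture.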
Claims:
Claims (8)

1°) Input device (2), in particular for a motor vehicle (1), having a video sensor (3) for the contactless capture of a position and/or a change of position of at least one finger (8) of the hand (6) of the user, the input device (2) recognizing an input according to the position and/or the change of position of at least one finger (8) and executing this input, device characterized in that it determines a neutral zone (11) in the image captured by the camera (3), this zone being adjacent to at least one gesture zone (12-15), the input device (2) taking a change of position in the gesture zone (12-15) into account as an input only if the gesture originates from the neutral zone (11).

2°) Input device according to claim 1, characterized in that it takes the change of position in the gesture zone (12-15) into account as an input only if the change of position takes place in a direction coming from the neutral zone (11).

3°) Input device according to claim 1, characterized in that, depending on the position of the hand (6) in the captured image, it defines the neutral zone (11) so that the hand (6), at least substantially at rest, lies at least substantially in the neutral zone (11).

4°) Input device according to claim 1, characterized in that it defines the neutral zone (11) larger than the hand (6), so that the gesture zone (12-15) is at a distance from the hand (6) which is at least substantially at rest.

5°) Input device according to claim 1, characterized in that it moves the neutral zone (11) with the hand (6).

6°) Input device according to claim 1, characterized in that it determines the center of gravity (10) of the hand (6) and moves the neutral zone according to the position or change of position of the center of gravity (10).

7°) Input device according to claim 1, characterized in that it defines the neutral zone (11) as adjacent to at least two gesture zones (12-15) which face each other, each being adjacent to the neutral zone (11).

8°) Input device according to claim 1, characterized in that, if it detects a change of position in several gesture zones (12-15), it takes into account for the input only the changes of position which occur in the gesture zone (12-15) in which the greatest number of changes of position is detected.
Similar technologies:
Publication number | Publication date | Patent title
FR3029655A1 | 2016-06-10 | INPUT DEVICE, IN PARTICULAR FOR A MOTOR VEHICLE, FOR THE CONTACTLESS CAPTURE OF A POSITION AND/OR A CHANGE OF POSITION OF AT LEAST ONE FINGER OF A USER'S HAND
US9733717B2 | 2017-08-15 | Gesture-based user interface
US9477315B2 | 2016-10-25 | Information query by pointing
US20190251374A1 | 2019-08-15 | Travel assistance device and computer program
EP3109797A1 | 2016-12-28 | Method for recognising handwriting on a physical surface
CN105745606A | 2016-07-06 | Identifying a target touch region of a touch-sensitive surface based on an image
EP3332352B1 | 2019-09-04 | Device and method for detecting a parking space that is available for a motor vehicle
WO2016079433A1 | 2016-05-26 | Graphical interface and method for managing said graphical interface during the touch-selection of a displayed element
FR3029484A1 | 2016-06-10 | METHOD OF INTERACTING FROM THE FLYWHEEL BETWEEN A USER AND AN ON-BOARD SYSTEM IN A VEHICLE
FR3023629A1 | 2016-01-15 | INFORMATION PROCESSING APPARATUS FOR DETECTING OBJECT FROM IMAGE, METHOD FOR CONTROLLING APPARATUS, AND STORAGE MEDIUM
EP1228423B1 | 2003-10-22 | Method for controlling a touchpad
FR3023513A1 | 2016-01-15 | INTERACTION METHOD FOR DRIVING A COMBINED INSTRUMENT OF A MOTOR VEHICLE
FR3030798A1 | 2016-06-24 | METHOD FOR MANAGING AN INPUT DEVICE AND INPUT DEVICE APPLIED TO A MOTOR VEHICLE FOR CARRYING OUT THE METHOD
EP3274809A1 | 2018-01-31 | Control method, control device, system and motor vehicle comprising such a control device
EP3070643A1 | 2016-09-21 | Method and device for object recognition by analysis of digital image signals representative of a scene
TW201339948A | 2013-10-01 | Electronic device and method for capturing image
FR3041445A1 | 2017-03-24 | METHOD AND SELECTION BUTTON FOR MOVING A POINTER ON A VIRTUAL OBJECT
EP2881841A1 | 2015-06-10 | Method for continuously recognising gestures by a user of a hand-held mobile terminal provided with a motion sensor unit, and associated device
EP3393840B1 | 2019-11-06 | Touch-sensitive surface vehicle steering wheel
JP5912177B2 | 2016-04-27 | Operation input device, operation input method, and operation input program
EP2936284B1 | 2018-05-30 | Interface module making it possible to detect a gesture
US10647352B2 | 2020-05-12 | Optical detection of the position of the steering wheel
FR3048111A1 | 2017-08-25 | OPTICAL DETECTION OF GESTURES MADE BY THE FINGERS OF A DRIVER
JP6188468B2 | 2017-08-30 | Image recognition device, gesture input device, and computer program
EP2936282B1 | 2019-05-08 | Interface module
Patent family:
Publication number | Publication date
DE102014224898A1 | 2016-06-09
CN105759955A | 2016-07-13
ITUB20156045A1 | 2017-06-02
FR3029655B1 | 2018-11-16
DE202015100273U1 | 2015-04-08
Cited and citing documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US20110118877A1 | 2009-11-19 | 2011-05-19 | Samsung Electronics Co., Ltd. | Robot system and method and computer-readable medium controlling the same
US20130271397A1 | 2012-04-16 | 2013-10-17 | Qualcomm Incorporated | Rapid gesture re-engagement
WO2019092386A1 | 2017-11-13 | 2019-05-16 | Nicand Patrick | Gesture-based control system for actuators
FR3073649A1 | 2017-11-13 | 2019-05-17 | Frederic Delanoue | Gesture control system for actuators
EP2441635B1 | 2010-10-06 | 2015-01-21 | Harman Becker Automotive Systems GmbH | Vehicle User Interface System
US8817087B2 | 2010-11-01 | 2014-08-26 | Robert Bosch Gmbh | Robust video-based handwriting and gesture recognition for in-car applications
CN102436301B | 2011-08-20 | 2015-04-15 | TCL Corporation | Human-machine interaction method and system based on reference region and time domain information
JP5593339B2 | 2012-02-07 | 2014-09-24 | Nippon Systemware Co., Ltd. | Gesture recognition device using a steering wheel of an automobile, hand recognition method and program thereof
CN102662557B | 2012-03-07 | 2016-04-13 | Shanghai Huaqin Telecom Technology Co., Ltd. | Mobile terminal and unlock method
JP6030430B2 | 2012-12-14 | 2016-11-24 | Clarion Co., Ltd. | Control device, vehicle and portable terminal
DE102015015067A1 | 2015-11-20 | 2017-05-24 | Audi Ag | Motor vehicle with at least one radar unit
DE102016206142A1 | 2016-04-13 | 2017-10-19 | Volkswagen Aktiengesellschaft | User interface, means of locomotion and method of detecting a hand of a user
Legal events:
2016-12-21 | PLFP | Fee payment | Year of fee payment: 2
2017-12-19 | PLFP | Fee payment | Year of fee payment: 3
2018-01-05 | PLSC | Publication of the preliminary search report | Effective date: 20180105
2019-12-19 | PLFP | Fee payment | Year of fee payment: 5
2020-12-17 | PLFP | Fee payment | Year of fee payment: 6
2021-12-15 | PLFP | Fee payment | Year of fee payment: 7
Priority:
Application number | Filing date | Patent title
DE102014224898.1 | 2014-12-04 |
DE102014224898.1A | 2014-12-04 | Method for operating an input device, input device
DE201520100273 | 2015-01-22 | Input device